Recurrent competitive networks can learn locally excitatory topologies

Abstract

A common form of neural network consists of spatially arranged neurons, with weighted connections between the units providing both local excitation and long-range or global inhibition. Such networks, known as soft-winner-take-all networks or lateral-inhibition type neural fields, have been shown to exhibit desirable information-processing properties including balancing the influence of compatible inputs, deciding between incompatible inputs, signal restoration from noisy, weak, or overly strong input, and the ability to be used as trainable building blocks in larger networks. However, the local excitatory connections in such a network are typically hand-wired based on a fixed spatial arrangement which is chosen using prior knowledge of the dimensionality of the data to be learned by such a network, and neuroanatomical evidence is stubbornly inconsistent with these wiring schemes. Here we present a learning rule that allows networks with completely random internal connectivity to learn the weighted connections necessary for implementing the “local” excitation used by these networks, where the locality is with respect to the inherent topology of the input received by the network, rather than being based on an arbitrarily prescribed spatial arrangement of the cells in the network. We use the Siegert approximation to leaky integrate-and-fire neurons, obtaining networks with consistently sparse activity, to which we apply standard Hebbian learning with weight normalization, plus homeostatic activity regulation to ensure full network utilization. Our results show that such networks learn appropriate excitatory connections from the input, and do not require these connections to be hand-wired with a fixed topology as they traditionally have been for decades.
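A minimal rate-based sketch of the mechanism the abstract describes, under several labeled simplifications: a softmax over unit drives stands in for the paper's Siegert approximation to leaky integrate-and-fire neurons (softmax is one common way to implement soft winner-take-all competition, not the paper's method), the 1D ring-shaped input is a toy stimulus, and all constants (ETA, BETA, learning-rate and homeostasis coefficients) are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N = 20, 50        # input units, network units
ETA = 0.05              # Hebbian learning rate (illustrative)
BETA = 20.0             # soft-WTA sharpness (illustrative)
TARGET = 1.0 / N        # homeostatic target: every unit used equally often
STEPS = 5000

# Completely random initial feedforward and recurrent excitatory weights,
# divisively normalized so each unit's incoming weights sum to one.
W_ff = rng.random((N, N_IN))
W_ff /= W_ff.sum(axis=1, keepdims=True)
W_rec = rng.random((N, N))
np.fill_diagonal(W_rec, 0.0)
W_rec /= W_rec.sum(axis=1, keepdims=True)

theta = np.zeros(N)            # homeostatic thresholds
avg_act = np.full(N, TARGET)   # running estimate of each unit's activity

def soft_wta(drive, beta=BETA):
    """Soft winner-take-all: a softmax implements shared (divisive)
    inhibition; beta sets how sharp the competition is."""
    e = np.exp(beta * (drive - drive.max()))
    return e / e.sum()

for t in range(STEPS):
    # Toy input with an inherent 1D ring topology: a smooth bump of
    # activity at a random position on the ring.
    c = rng.integers(N_IN)
    d = np.minimum(np.abs(np.arange(N_IN) - c),
                   N_IN - np.abs(np.arange(N_IN) - c))
    x = np.exp(-(d ** 2) / 8.0)

    # One recurrent settling step (rate units in place of Siegert LIF rates).
    a = soft_wta(W_ff @ x - theta)
    a = soft_wta(W_ff @ x + W_rec @ a - theta)

    # Hebbian learning with divisive weight normalization.
    W_ff += ETA * np.outer(a, x)
    W_ff /= W_ff.sum(axis=1, keepdims=True)
    W_rec += ETA * np.outer(a, a)
    np.fill_diagonal(W_rec, 0.0)
    W_rec /= W_rec.sum(axis=1, keepdims=True)

    # Homeostatic activity regulation: raise the threshold of over-active
    # units and lower it for silent ones, so the whole network is utilized.
    avg_act = 0.99 * avg_act + 0.01 * a
    theta += 0.01 * (avg_act - TARGET)

# Units that respond to nearby input positions should end up sharing strong
# recurrent weights: excitation becomes "local" in the input's topology.
pref = np.argmax(W_ff, axis=1)
dist = np.minimum(np.abs(pref[:, None] - pref[None, :]),
                  N_IN - np.abs(pref[:, None] - pref[None, :]))
print("corr(recurrent weight, preferred-input distance):",
      np.corrcoef(W_rec.ravel(), dist.ravel())[0, 1])
```

In a run of this sketch the learned recurrent weights should correlate negatively with the distance between units' preferred input positions, i.e., local excitation emerges from the input's topology even though the network's own connectivity started out random, which is the abstract's central claim in miniature.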